Early-Learning Regularization Prevents Memorization of Noisy Labels (Appendix: Proof of Theorem 1)
In this section, we formalize and substantiate the claims of Theorem 1. Theorem 1 has three parts, which we address in the following sections. First, in Section A.2, we show that the classifier makes progress during the early-learning phase: over the first T iterations, the gradient is well correlated with v and the accuracy on mislabeled examples increases. However, as noted in the main text, this early progress halts because the gradient terms corresponding to correctly labeled examples begin to disappear. We prove this rigorously in Section A.3, which shows that the overall magnitude of the gradient terms corresponding to correctly labeled examples shrinks over the first T iterations. Finally, in Section A.4, we prove the claimed asymptotic behavior: as t → ∞, gradient descent perfectly memorizes the noisy labels.
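As a rough numerical illustration of this dynamic (our own sketch, not the construction or constants used in the proof), the following script runs gradient descent for linear logistic regression on high-dimensional Gaussian data with partially flipped labels; accuracy on the mislabeled examples, measured against their true labels, rises during the first iterations, while continued training fits the noisy labels.

# Minimal simulation sketch of early learning followed by memorization on
# noisy labels; the data model and constants here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, p, noise = 200, 1000, 0.3              # n samples, p >> n dimensions, 30% flips
v = np.zeros(p); v[0] = 1.0               # true separating direction
y_true = rng.choice([-1.0, 1.0], size=n)
X = y_true[:, None] * v + 0.1 * rng.standard_normal((n, p))
flip = rng.random(n) < noise
y = np.where(flip, -y_true, y_true)       # observed (noisy) labels

theta, lr = np.zeros(p), 1.0
for t in range(1, 20001):
    margins = np.clip(y * (X @ theta), -30, 30)
    weights = 1.0 / (1.0 + np.exp(margins))            # sigmoid(-margin)
    grad = -(X * (y * weights)[:, None]).mean(axis=0)  # logistic-loss gradient
    theta -= lr * grad
    if t in (10, 100, 1000, 20000):
        pred = np.sign(X @ theta)
        print(f"iter {t:6d}  "
              f"true-label acc on mislabeled: {(pred[flip] == y_true[flip]).mean():.2f}  "
              f"noisy-label training acc: {(pred == y).mean():.2f}")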
Early-Learning Regularization Prevents Memorization of Noisy Labels
We propose a novel framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an "early learning" phase, before eventually memorizing the examples with false labels. We prove that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and give a theoretical explanation in this setting. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early learning phase. In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization. There are two key elements to our approach. First, we leverage semi-supervised learning techniques to produce target probabilities based on the model outputs. Second, we design a regularization term that steers the model towards these targets, implicitly preventing memorization of the false labels. The resulting framework is shown to provide robustness to noisy annotations on several standard benchmarks and real-world datasets, where it achieves results comparable to the state of the art.
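As a concrete illustration of these two elements, the sketch below keeps a per-example target as a running average of the model's past outputs (a temporal-ensembling-style estimate) and adds a penalty that rewards agreement between the current predictions and those targets. The functional form and the hyperparameters lam and beta are illustrative choices for this sketch rather than values prescribed by the abstract.

# Rough PyTorch sketch of an early-learning regularizer of the kind described
# above; the exact penalty and the hyperparameters are illustrative.
import torch
import torch.nn.functional as F

class EarlyLearningRegularizedLoss:
    def __init__(self, num_examples, num_classes, lam=3.0, beta=0.7):
        # one running target per training example (move to the model's device in practice)
        self.targets = torch.zeros(num_examples, num_classes)
        self.lam, self.beta = lam, beta

    def __call__(self, logits, noisy_labels, idx):
        probs = F.softmax(logits, dim=1)
        with torch.no_grad():  # temporal-ensembling update of the targets
            self.targets[idx] = (self.beta * self.targets[idx]
                                 + (1.0 - self.beta) * probs)
        ce = F.cross_entropy(logits, noisy_labels)
        agreement = (probs * self.targets[idx]).sum(dim=1)         # <p_i, t_i>
        reg = torch.log(1.0 - agreement.clamp(max=1.0 - 1e-4)).mean()
        return ce + self.lam * reg                                 # steers p_i toward t_i

Here idx holds the dataset indices of the mini-batch, so each example keeps its own target across epochs; minimizing the penalty pushes the predictions toward the targets built up during early learning, which counteracts memorization of the false labels.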
Review for NeurIPS paper: Early-Learning Regularization Prevents Memorization of Noisy Labels
Weaknesses: I have many reservations about the claims of the paper. I would appreciate it if the authors could clarify some of these issues during their rebuttal. First, the proof of their main theorem about logistic regression has many issues. One key issue is that the authors make assumptions within the proof that are not clearly stated or justified upfront. For example, in Line 440 in the supplementary materials, the proof assumes that θᵀv ≲ 1.
Review for NeurIPS paper: Early-Learning Regularization Prevents Memorization of Noisy Labels
The paper studies the following interesting phenomenon (observed in the previous literature): when trained on a dataset with incorrectly labeled points (i.e. "label noise"), DNNs first learn the benign ("correctly labeled") points and, once this is done, they start "memorizing" the noisy points. It was previously shown in the literature (empirically) that this second "memorization" phase hurts generalization. The authors make two contributions: (Contribution 1) They demonstrate (empirically and theoretically) that a similar phenomenon can be observed in the simpler setting of over-parametrized (dimensionality ≫ number of points) linear two-class logistic regression, when the class distributions are isotropic Gaussians with fixed means ±μ and vanishing variance (see Theorem 1 and Figure A.1). (Contribution 2) Motivated by the theory of Contribution 1, the authors propose a novel regularizer. When used in vanilla DNN training with the cross-entropy loss, this regularizer successfully prevents the networks from falling into the "memorization phase" (as evidenced by Figure 1). All the reviewers agree that the topic and the focus of this paper are very timely.
Reviews: Time Matters in Regularizing Deep Networks: Weight Decay and Data Augmentation Affect Early Learning Dynamics, Matter Little Near Convergence
The paper is well-written and the authors are clear about their claims. The idea of critical periods during training with reference to regularization is interesting. If true, this would give a different way to think about generalization. The authors have performed a number of experiments with different configurations, although there are deficiencies, as mentioned below.
Time Matters in Regularizing Deep Networks: Weight Decay and Data Augmentation Affect Early Learning Dynamics, Matter Little Near Convergence
Regularization is typically understood as improving generalization by altering the landscape of local extrema to which the model eventually converges. Deep neural networks (DNNs), however, challenge this view: We show that removing regularization after an initial transient period has little effect on generalization, even if the final loss landscape is the same as if there had been no regularization. In some cases, generalization even improves after interrupting regularization. Conversely, if regularization is applied only after the initial transient, it has no effect on the final solution, whose generalization gap is as bad as if regularization never happened. This suggests that what matters for training deep networks is not just whether or how, but when to regularize. The phenomena we observe are manifest in different datasets (CIFAR-10, CIFAR-100, SVHN, ImageNet), different architectures (ResNet-18, All-CNN), different regularization methods (weight decay, data augmentation, mixup), and different learning rate schedules (exponential, piece-wise constant). They collectively suggest that there is a "critical period" for regularizing deep networks that is decisive of the final performance. More analysis should, therefore, focus on the transient rather than asymptotic behavior of learning.
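As an illustration of the kind of intervention studied here, the sketch below (with a placeholder model, data loader, and epoch threshold) applies weight decay only during an initial window and switches it off afterwards; flipping the condition gives the converse experiment in which regularization starts only after the transient.

# Sketch of interrupting weight decay after an initial transient; the model,
# data loader, and the value of critical_epochs are placeholders.
import torch
import torch.nn.functional as F

def train(model, loader, epochs=200, critical_epochs=50, wd=5e-4):
    opt = torch.optim.SGD(model.parameters(), lr=0.1,
                          momentum=0.9, weight_decay=wd)
    for epoch in range(epochs):
        if epoch == critical_epochs:            # end of the initial transient:
            for group in opt.param_groups:      # remove regularization from here on
                group["weight_decay"] = 0.0
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()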
Reviews: Time Matters in Regularizing Deep Networks: Weight Decay and Data Augmentation Affect Early Learning Dynamics, Matter Little Near Convergence
The paper describes how regularization matters in different ways during different parts of the training process, i.e., the timing is important for the regularization to be effective. Reviewers have several suggestions, which should be incorporated to the extent possible, but the ideas/results should be of interest to members of the community.
Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning
Li, Xiaochuan, Yu, Zichun, Xiong, Chenyan
Synthetic data has been widely used to train large language models, but its generative nature inevitably introduces noisy, non-informative, and misleading learning signals. In this paper, we propose Montessori-Instruct, a novel data synthesis framework that tailors the data synthesis ability of the teacher language model toward the student language model's learning process. Specifically, we utilize the local data influence of synthetic training data points on students to characterize students' learning preferences. Then, we train the teacher model with Direct Preference Optimization (DPO) to generate synthetic data tailored toward student learning preferences. Experiments with Llama3-8B-Instruct (teacher) and Llama3-8B (student) on Alpaca Eval and MT-Bench demonstrate that Montessori-Instruct significantly outperforms standard synthesis methods by 18.35% and 46.24% in relative terms. Our method also beats data synthesized by a stronger teacher model, GPT-4o. Further analysis confirms the benefits of the teacher learning to generate more influential training data for the student's improved learning, the advantages of local data influence in accurately measuring student preferences, and the robustness of Montessori-Instruct across different student models. Our code and data are open-sourced at https://github.com/cxcscmu/Montessori-Instruct.
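A hedged sketch of the local-data-influence idea is given below, assuming a HuggingFace-style student model whose forward pass returns a loss when labels are included in the batch; the function and variable names are illustrative and not taken from the released code. The influence of a candidate synthetic example is approximated by how much a single update on it reduces the student's loss on a reference batch.

# Sketch: approximate local data influence as the drop in reference loss after
# one optimizer step on the candidate example (names and step size illustrative).
import copy
import torch

def local_data_influence(student, candidate_batch, reference_batch, lr=1e-5):
    probe = copy.deepcopy(student)              # leave the real student untouched
    opt = torch.optim.SGD(probe.parameters(), lr=lr)

    def ref_loss(model):
        with torch.no_grad():
            return model(**reference_batch).loss.item()

    before = ref_loss(probe)
    opt.zero_grad()
    probe(**candidate_batch).loss.backward()    # one step on the candidate data
    opt.step()
    after = ref_loss(probe)
    return before - after   # positive: the candidate helped on the reference set

Generations with higher and lower influence scores can then be paired as chosen and rejected responses to form the preference data used for DPO training of the teacher.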